    The Skipping Behavior of Users of Music Streaming Services and its Relation to Musical Structure

    The behavior of users of music streaming services is investigated from the point of view of the temporal dimension of individual songs; specifically, the main object of the analysis is the point in time within a song at which users stop listening and start streaming another song ("skip"). The main contribution of this study is the ascertainment of a correlation between the distribution in time of skipping events and the musical structure of songs. It is also shown that this distribution is not only specific to individual songs, but also independent of the cohort of users and, under stationary conditions, of the date of observation. Finally, user behavioral data is used to train a predictor of the musical structure of a song solely from its acoustic content; it is shown that the use of such data, available in large quantities to music streaming services, yields significant improvements in accuracy over the customary way of training this class of algorithms, in which only smaller amounts of hand-labeled data are available.
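    To make the kind of aggregate the study works with concrete, here is a minimal sketch (hypothetical names, not the authors' code) of estimating a per-song skip-time distribution from playback logs, assuming each log entry records the position at which a user skipped:

```python
import numpy as np

def skip_time_histogram(skip_positions, duration, n_bins=100):
    """Estimate the distribution of skip events over a song's timeline.

    skip_positions: playback positions (seconds) at which users skipped
    duration:       total song duration in seconds
    Returns a normalized histogram over n_bins equal time slices.
    """
    # Normalize each skip position to [0, 1) relative to song length,
    # discarding malformed log entries outside the song's duration.
    rel = np.asarray([p / duration for p in skip_positions if 0 <= p < duration])
    hist, _ = np.histogram(rel, bins=n_bins, range=(0.0, 1.0))
    # Convert counts to a probability distribution over time slices.
    return hist / max(hist.sum(), 1)

# Peaks in such a distribution could then be compared against annotated
# section boundaries (e.g. verse/chorus transitions) of the same song.
```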

    Maximum entropy models capture melodic styles

    We introduce a Maximum Entropy model able to capture the statistics of melodies in music. The model can be used to generate new melodies that emulate the style of the musical corpus which was used to train it. Instead of using the n-body interactions of (n-1)-order Markov models, traditionally used in automatic music generation, we use a k-nearest neighbour model with pairwise interactions only. In that way, we keep the number of parameters low and avoid the over-fitting problems typical of Markov models. We show that long-range musical phrases do not need to be explicitly enforced using high-order Markov interactions, but can instead emerge from multiple, competing, pairwise interactions. We validate our Maximum Entropy model by contrasting how much the generated sequences capture the style of the original corpus without plagiarizing it. To this end we use a data-compression approach to discriminate the levels of borrowing and innovation featured by the artificial sequences. The results show that our modelling scheme outperforms both fixed-order and variable-order Markov models. This shows that, despite being based only on pairwise interactions, this Maximum Entropy scheme opens the possibility of generating musically sensible alterations of the original phrases, providing a way to generate innovation.
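    The following toy sketch illustrates the structure of such a pairwise model and how melodies could be drawn from it by Gibbs sampling. It is not the authors' implementation: the alphabet size, interaction range, and couplings are placeholders, and the maximum-entropy training step (fitting the couplings to a corpus, e.g. by pseudo-likelihood maximization) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

A = 20   # alphabet size (number of distinct pitches; assumed)
K = 10   # interaction range: couple each note to its K nearest neighbours
L = 64   # length of the generated melody

# Pairwise couplings J[d] between notes at distance d+1, plus local fields h.
# In the paper these would be fit to a corpus; here they are random placeholders.
J = rng.normal(0, 0.1, size=(K, A, A))
h = rng.normal(0, 0.1, size=A)

def local_energy(s, i, v):
    """Energy contribution of placing symbol v at position i of sequence s."""
    e = -h[v]
    for d in range(1, K + 1):
        if i - d >= 0:
            e -= J[d - 1][s[i - d], v]
        if i + d < len(s):
            e -= J[d - 1][v, s[i + d]]
    return e

def gibbs_sample(n_sweeps=50):
    """Generate a melody by Gibbs sampling from the pairwise maxent model."""
    s = rng.integers(0, A, size=L)
    for _ in range(n_sweeps):
        for i in range(L):
            # Boltzmann distribution over symbols at position i,
            # conditioned on the current values of the neighbours.
            logits = -np.array([local_energy(s, i, v) for v in range(A)])
            p = np.exp(logits - logits.max())
            s[i] = rng.choice(A, p=p / p.sum())
    return s

print(gibbs_sample())
```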

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:

    Objective:
    - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint.
    - For what destination and for what use? To be performed by a human(s) (in the case of a musical score), or by a machine (in the case of an audio file).

    Representation:
    - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat.
    - What format is to be used? Examples are: MIDI, piano roll or text.
    - How will the representation be encoded? Examples are: scalar, one-hot or many-hot.

    Architecture:
    - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks.

    Challenge:
    - What are the limitations and open challenges? Examples are: variability, interactivity and creativity.

    Strategy:
    - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation.

    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.

    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
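    To make the encoding choices under the Representation dimension concrete, here is a minimal illustrative sketch (not taken from the survey) of one-hot encoding for a monophonic melody versus many-hot (piano-roll-style) encoding for polyphony:

```python
import numpy as np

N_PITCHES = 128  # MIDI pitch range

def one_hot(melody):
    """Monophonic melody: exactly one active pitch per time step (one-hot)."""
    X = np.zeros((len(melody), N_PITCHES))
    X[np.arange(len(melody)), melody] = 1.0
    return X

def many_hot(chords):
    """Polyphony: several simultaneous pitches per time step (many-hot)."""
    X = np.zeros((len(chords), N_PITCHES))
    for t, chord in enumerate(chords):
        X[t, list(chord)] = 1.0
    return X

melody = [60, 62, 64, 65]               # C4 D4 E4 F4
chords = [{60, 64, 67}, {62, 65, 69}]   # C major and D minor triads
print(one_hot(melody).sum(axis=1))      # -> [1. 1. 1. 1.]
print(many_hot(chords).sum(axis=1))     # -> [3. 3.]
```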

    Regularity, Document Generation, and Cyc

    We are interested in the modeling of hierarchical networks, and have developed a model of semantic hierarchies based on REGULARITY, a generalization of inheritance [MIL 88a]. We are also interested in the generation of structured sequential documents from hypertext documents, using the semantics of hypertext links to structure the presentation [MIL 90b]. We acquired a copy of the CYC knowledge base [LEN 90a] in order to: 1) use the semantic network underlying CYC to support text generation, and 2) test the regularity hypothesis. Ironically, the enormous size of CYC forced its designers to adopt implementation optimizations that make it poorly suited to the deep logical exploration required by text generation. Moreover, the study of regularity patterns in CYC led us to generalize the notion of regularity and to formulate a number of hypotheses about the logical structure of the knowledge base.

    An Object-Oriented Representation of Pitch-Classes, Intervals, Scales and Chords

    The MusES system is intended to provide an explicit representation of the musical knowledge involved in analyzing chord sequences in tonal music. We describe in this paper the first layer of the system, which provides an operational representation of pitch classes and their algebra, as well as standard calculus on scales, intervals and chords. The proposed representation takes enharmonic spelling into account, i.e. it differentiates between equivalent pitch classes (e.g. C# and Db). This first layer is intended to provide a solid foundation for musical symbolic knowledge-based systems; as such, it provides an ontology describing the basic units of harmony. It may also serve as a non-trivial pedagogical example of applying object-oriented techniques (Smalltalk-80) to musical knowledge representation. A document describing the system in full detail is available on request.
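    The paper's implementation is in Smalltalk-80; the following is a minimal Python transposition of the core idea (hypothetical class and field names), showing how storing a letter name plus an alteration keeps enharmonically equivalent spellings such as C# and Db distinct while still letting them compare equal in sounding pitch:

```python
from dataclasses import dataclass

# Semitone offset of each natural letter name within the octave.
LETTER_SEMITONES = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

@dataclass(frozen=True)
class PitchClass:
    letter: str      # 'A'..'G'
    alteration: int  # -1 = flat, 0 = natural, +1 = sharp, etc.

    @property
    def semitone(self):
        """Sounding pitch class as a semitone number 0..11."""
        return (LETTER_SEMITONES[self.letter] + self.alteration) % 12

    def enharmonic_to(self, other):
        """Same sounding pitch, possibly spelled differently."""
        return self.semitone == other.semitone

c_sharp = PitchClass('C', +1)
d_flat  = PitchClass('D', -1)
print(c_sharp == d_flat)              # False: distinct spellings are preserved
print(c_sharp.enharmonic_to(d_flat))  # True: same sounding pitch
```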

    Real-Time Score Notation from Raw MIDI Inputs

    This paper describes tools designed and experiments conducted in the context of MIROR, a European project investigating adaptive systems for early childhood music education based on the paradigm of reflexive interaction. In MIROR, music notation is used as the trace of both the user's and the system's activity, produced from MIDI instruments. The task of displaying such raw MIDI inputs and outputs is difficult, as no a priori information is known concerning the underlying tempo or metrical structure. We describe here a completely automatic processing chain from the raw MIDI input to fully-fledged music notation. The low-level music description is first converted into a score-level description and then automatically rendered as a graphic score. The whole process operates in real time. The paper describes the various conversion steps and issues, including extensions to support score annotations. The process is validated using about 30,000 musical sequences gathered from MIROR experiments and made available for public use.
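    One step of such a chain, snapping raw onset times to a score grid, can be sketched as follows. This is an illustrative simplification, not the MIROR code: it assumes the beat period is already known, whereas inferring tempo and meter from raw input is precisely the hard problem the system addresses.

```python
def quantize_onsets(onsets_ms, beat_ms, divisions=4):
    """Snap raw MIDI onset times to the nearest grid position.

    onsets_ms: note onset times in milliseconds
    beat_ms:   estimated beat period in milliseconds (assumed given here)
    divisions: grid resolution per beat (4 = sixteenth notes)
    Returns onsets as (beat, subdivision) pairs, a score-level description.
    """
    grid = beat_ms / divisions
    out = []
    for t in onsets_ms:
        step = round(t / grid)                # nearest grid slot
        out.append(divmod(step, divisions))   # (beat index, subdivision)
    return out

# Example: a slightly imprecise performance of four sixteenth notes at 120 BPM.
print(quantize_onsets([0, 130, 248, 381], beat_ms=500))
# -> [(0, 0), (0, 1), (0, 2), (0, 3)]
```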